The President’s Inbox Recap: Governance of Artificial Intelligence
from The Water's Edge

The United States is behind its peers when it comes to regulating artificial intelligence (AI).
U.S. President Joe Biden speaks at the White House about artificial intelligence on July 21, 2023. Evelyn Hockstein/REUTERS

Last month, Jim sat down with Kat Duffy, senior fellow for digital and cyberspace policy at the Council. They discussed the opportunities and challenges facing governance on artificial intelligence (AI).

Governance of Artificial Intelligence, With Kat Duffy

Kat Duffy, a senior fellow for digital and cyberspace policy at CFR, sits down with James M. Lindsay to discuss the capacity of the U.S. government to lead in creating a framework for regulating artificial intelligence (AI). 

February 12, 2024 — 35:16 min

Here are seven highlights from their conversation:

1.) How governments will regulate AI matters. AI is a potentially transformative technology. It is also a tool that can be used for good—or bad. AI can amplify existing societal problems by scaling existing biases or increasing the spread of disinformation and misinformation. Kat argued that “if we want a technology that can serve society, then we need to think of society as a whole and how that technology can be helped to serve it.”

More on:

Artificial Intelligence (AI)

Technology and Innovation

Censorship and Freedom of Expression

2.) China, the European Union, and the United States have put forth a “triad” of different AI governance approaches. China has imposed strong oversight measures on the country’s AI systems and models. The European Union’s approach “follows a long pattern now of Europe having led in rights-centric and risk-focused digital governance.” The U.S. government, however, has mostly been hands-off when it comes to AI governance.

3.) AI governance in the United States right now is being driven by the private sector. Biden issued an executive order last October calling for government action on AI regulation. Congress, however, has yet to pass significant AI regulations. In the past, most AI innovation was driven by government investment; today, it comes from private companies. Kat argued “there is this real concern that the development of AI has been so divorced from government that the government now has very little control over how that AI is being developed and rolled out into the world.” This is a problem because “corporate governance is designed to manage investor risk.” It is not, however, “designed to manage or to govern societal risk, and in fact, may be antithetical to it.”

4.) AI governance does not have to interrupt innovation and progress in emerging technologies, nor should it. Kat argued that “there's a lot of room when we talk about governance, so much of the conversation right now is around carrots instead of sticks, but there's so much room for creative, interesting, and proactive government engagement here in addition to just regulation or executive orders.” Kat pointed to the need for “healthy tension” between room for innovation and regulatory requirements.

5.) The United States can still lead on AI. Kat rejected what she called a “false dichotomy” between the existence of guardrails and the ability to innovate. “When you put boundaries on any creative process, you also just encourage creativity in different ways,” Kat argued. “You can streamline investment, you can make it more efficient, and you can waste fewer resources and you can move faster.” She also added the United States could be doing more to ensure global connectivity in lower income countries.

6.) The regulation of AI in the United States poses important First Amendment questions. Kat pointed to two uses of AI-generated imagery that are technically identical but carry very different implications for society: AI can be used to create fake images that are clearly parodies, or it can be used to produce fake explicit images of female politicians. As Kat pointed out, “both of those are the same thing from a technical standpoint, but they come from different places, and they mean different things in terms of what we value societally.” But it’s unclear how much leeway the First Amendment gives government efforts to push social media platforms to take down false or potentially defamatory information. The Supreme Court will be addressing that question in Murthy v. Missouri, which asks whether the executive branch overstepped its authority in asking social media platforms to take down disinformation about Covid-19 and the 2020 election. Kat pointed out, “In this moment, we may actually be the only government in the world that thinks it can't go to those social media companies to talk about those things.”


7.) The Supreme Court’s decision on the Chevron doctrine has major implications for AI governance. The Chevron doctrine refers to the existing norm that the executive branch can interpret statutes passed by Congress. As Kat explained, “so long as that interpretation is reasonable, the courts, the judicial branch, will give deference to the executive branch's interpretation of a statute.” If the Supreme Court curtails or jettisons the Chevron doctrine, then judges could become the arbiters of a statute. She added, “if you think that the legislative branch has moved slowly, think about the judicial branch.”

If you’re looking to read more of Kat’s work, check out her blog post for CFR.org that explores the use of AI in the Indonesian elections. She also co-wrote a piece for Foreign Affairs on “Defending the Year of Democracy.” It analyzes how issues like AI, cybersecurity, and internet governance will impact the 80-plus elections taking place this year.

Creative Commons: Some rights reserved.
This work is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.